
Compiler-Decided Dynamic Memory Allocation for Scratch-Pad Based Embedded Systems


Abstract

In this research we propose a highly predictable, low-overhead and yet dynamic memory allocation strategy for embedded systems with scratch-pad memory. A scratch-pad is a fast compiler-managed SRAM memory that replaces the hardware-managed cache. It is motivated by its better real-time guarantees versus cache and by its significantly lower overheads in energy consumption, area and overall runtime, even with a simple allocation scheme. Scratch-pad allocation methods are primarily of two types. First, software-caching schemes emulate the workings of a hardware cache in software. Instructions are inserted before each load/store to check the software-maintained cache tags. Such methods incur large overheads in runtime, code size, energy consumption and SRAM space for tags, and deliver poor real-time guarantees, just like hardware caches. A second category of algorithms partitions variables at compile time into the two banks. However, a drawback of such static allocation schemes is that they do not account for dynamic program behavior. We propose a dynamic allocation methodology for global and stack data and program code that (i) accounts for changing program requirements at runtime, (ii) has no software-caching tags, (iii) requires no run-time checks, (iv) has extremely low overheads, and (v) yields 100% predictable memory access times. In this method, data that is about to be accessed frequently is copied into the scratch-pad using compiler-inserted code at fixed and infrequent points in the program. Earlier data is evicted if necessary. When compared to an existing static allocation scheme, results show that our scheme reduces runtime by up to 39.8% and energy by up to 31.3% on average for our benchmarks, depending on the SRAM size used. The actual gain depends on the SRAM size, but our results show that close to the maximum benefit in runtime and energy is achieved for a substantial range of small SRAM sizes commonly found in embedded systems.
Our comparison with a direct-mapped cache shows that our method performs roughly as well as a cached architecture in runtime and energy while delivering better real-time benefits.
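The core idea above — copying soon-to-be-hot data into the scratch-pad with plain compiler-inserted code at fixed program points, with no tags or run-time checks — can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the buffer names, sizes, and the `enter_hot_region`/`leave_hot_region` helpers are hypothetical, and the scratch-pad is modeled as an ordinary array.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical memory layout: spm[] stands in for the fast scratch-pad
 * SRAM bank; dram_buf is a program variable resident in slow main memory. */
#define SPM_SIZE 4096u
static uint8_t spm[SPM_SIZE];     /* compiler-managed scratch-pad (SRAM) */
static uint8_t dram_buf[2048];    /* variable normally kept in DRAM      */

/* The compiler has decided, at compile time, that dram_buf is about to be
 * accessed frequently, so it inserts a straight copy at this fixed program
 * point. There are no tags to maintain and no per-access checks, so the
 * cost of the transfer is fully known ahead of time. Whatever previously
 * occupied this scratch-pad region is simply evicted by the copy. */
static uint8_t *enter_hot_region(void) {
    memcpy(spm, dram_buf, sizeof dram_buf);
    return spm;                   /* accesses inside the region use the SPM */
}

/* On leaving the region, the (possibly modified) data is copied back. */
static void leave_hot_region(void) {
    memcpy(dram_buf, spm, sizeof dram_buf);
}

/* A hot loop rewritten by the compiler to read through the scratch-pad. */
int sum_hot(void) {
    uint8_t *p = enter_hot_region();
    int s = 0;
    for (unsigned i = 0; i < sizeof dram_buf; i++)
        s += p[i];                /* every access here is a fast SPM access */
    leave_hot_region();
    return s;
}
```

Because the copy sites are fixed and infrequent, every memory access inside the region resolves to the scratch-pad deterministically, which is what gives the 100% predictable access times claimed above.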

Record details

  • Author

    Udayakumaran, Sumesh

  • Year: 2006
  • Format: PDF
  • Language: en_US
